Although large language models can be prompted for both zero- and few-shot learning, performance drops significantly when no demonstrations are available. In this paper, we introduce Z-ICL, a new zero-shot method that closes the gap by constructing pseudo-demonstrations for a given test input using a raw text corpus. Concretely, pseudo-demonstrations are constructed by (1) finding the nearest neighbors to the test input from the corpus and pairing them with random task labels, and (2) applying a set of techniques to reduce the amount of direct copying the model does from the resulting demonstrations. Evaluation on nine classification datasets shows that Z-ICL outperforms previous zero-shot methods by a significant margin, and is on par with in-context learning with labeled training data in the few-shot setting. Overall, Z-ICL provides a significantly higher estimate of the zero-shot performance levels of a model, and supports future efforts to develop better pseudo-demonstrations that further improve zero-shot results.
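A minimal sketch of the pseudo-demonstration recipe described above, assuming an off-the-shelf sentence-embedding retriever; the corpus, label space, and prompt template are illustrative, and the paper's copying-reduction techniques are only noted, not implemented:

```python
# Illustrative sketch of Z-ICL-style pseudo-demonstrations (not the authors' code).
# Assumes a sentence-embedding retriever; the corpus and labels are toy examples.
import random
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "The acting was wooden and the plot made no sense.",
    "A heartfelt story with terrific performances.",
    "The soundtrack carries an otherwise forgettable film.",
    "I would happily watch this again with friends.",
]
labels = ["positive", "negative"]          # task label space
test_input = "An absolute delight from start to finish."

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)
query_emb = encoder.encode([test_input], normalize_embeddings=True)

# (1) retrieve the nearest neighbors of the test input from the raw corpus
k = 3
scores = corpus_emb @ query_emb[0]
neighbors = [corpus[i] for i in np.argsort(-scores)[:k]]

# (2) pair each neighbor with a *random* task label; the paper additionally
# applies techniques to reduce copying from these pseudo-demonstrations (omitted).
demos = [f"Review: {s}\nSentiment: {random.choice(labels)}" for s in neighbors]
prompt = "\n\n".join(demos + [f"Review: {test_input}\nSentiment:"])
print(prompt)
```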
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
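A brief usage sketch for the publicly released checkpoints; the smaller bigscience/bloom-560m variant stands in here, since loading the full 176B model requires multi-GPU or offloaded inference:

```python
# Load a released BLOOM checkpoint via the Hugging Face transformers library.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```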
The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, with the emergence of state-of-the-art 100B+ parameter models, large language models are increasingly expensive to accurately design and train. Notably, it can be difficult to evaluate how modeling decisions may impact emergent capabilities, given that these capabilities arise mainly from sheer scale. In the process of building BLOOM (the BigScience Large Open-science Open-access Multilingual language model), our goal is to identify an architecture and training setup that makes the best use of our 1,000,000 A100-GPU-hours budget. Specifically, we perform an ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization. In addition, we study the impact of various popular pre-training corpora on zero-shot generalization. We also study the performance of a multilingual model and how it compares to the English-only one. Finally, we consider the scaling behaviour of Transformers to choose the target model size, shape, and training setup. All our models and code are open-sourced at https://huggingface.co/bigscience.
We introduce the LongChecker system for scientific claim verification. Given a scientific claim and an evidence-containing research abstract, LongChecker predicts a veracity label and identifies supporting rationales in a multitask fashion, based on a shared encoding of the claim and abstract. We perform experiments on the SciFact dataset and find that LongChecker achieves state-of-the-art performance. We conduct analysis to understand the source of this improvement, and find that identifying the relationship between a claim and a rationale reporting a scientific finding often requires understanding the context in which the rationale appears. By making labeling decisions based on all available context, LongChecker achieves better performance on cases requiring this type of understanding. In addition, we show that LongChecker is able to leverage weakly-supervised in-domain data to facilitate few-shot domain adaptation for scientific claim verification.
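A minimal sketch of the multitask setup the abstract describes: a shared encoder over the concatenated claim and abstract feeds both a veracity-label head and a rationale head. The encoder choice, pooling, and token-level rationale granularity are simplifying assumptions, not the released LongChecker architecture:

```python
# Sketch of a multitask claim-verification head over a shared claim+abstract encoding.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultitaskVerifier(nn.Module):
    def __init__(self, encoder_name="allenai/longformer-base-4096", num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.label_head = nn.Linear(hidden, num_labels)   # e.g. SUPPORTS / REFUTES / NEI
        self.rationale_head = nn.Linear(hidden, 2)        # per-token rationale tagging

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden_states = out.last_hidden_state                  # (batch, seq, hidden)
        label_logits = self.label_head(hidden_states[:, 0])    # pooled first token
        rationale_logits = self.rationale_head(hidden_states)  # shared encoding, second task
        return label_logits, rationale_logits

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = MultitaskVerifier()
enc = tokenizer("claim text", "abstract text", return_tensors="pt", truncation=True)
label_logits, rationale_logits = model(enc["input_ids"], enc["attention_mask"])
```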
Self-rationalization models that predict task labels and generate free-text elaborations for their predictions could enable more intuitive interaction with NLP systems. These models are, however, currently trained with large amounts of human-written free-text explanations for each task, which hinders their broader usage. We propose to study a more realistic setting of self-rationalization using few training examples. We present FEB, a standardized collection of four existing English-language datasets and associated metrics. We identify the right prompting approach by extensively exploring natural language prompts on FEB. Then, by using this prompt and scaling the model size, we demonstrate that progress on few-shot self-rationalization is possible. We show there is still ample room for improvement in this task: the average plausibility of generated explanations, as assessed by human annotators, is at most 51%, while the plausibility of human explanations is 76%. We hope that FEB, together with our proposed approach, will spur the community to take on the few-shot self-rationalization challenge.
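A hedged illustration of the kind of natural language prompt this setting relies on, where the model is asked to produce a label and a free-text explanation in one pass; the wording and field names below are assumptions, not the prompt selected in the paper:

```python
# Toy prompt builder for few-shot self-rationalization (label + "because" explanation).
def build_prompt(demonstrations, test_premise, test_hypothesis):
    parts = []
    for premise, hypothesis, label, explanation in demonstrations:
        parts.append(
            f"premise: {premise} hypothesis: {hypothesis}\n"
            f"answer: {label} because {explanation}"
        )
    # The model is expected to continue with both a label and its rationale.
    parts.append(f"premise: {test_premise} hypothesis: {test_hypothesis}\nanswer:")
    return "\n\n".join(parts)

demos = [("A dog runs through a field.", "An animal is outdoors.",
          "entailment", "a dog is an animal and a field is outdoors")]
print(build_prompt(demos, "A man plays a guitar on stage.", "A man is performing."))
```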
Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best, or even whether they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark, which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard covering diverse NLP tasks. In addition, we present UniFew, a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing the complex machinery of recent prompt-based approaches for adapting downstream task formats to language model pretraining objectives. We demonstrate that, despite its simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
Determining coreference of concept mentions across multiple documents is a fundamental task in natural language understanding. Previous work on cross-document coreference resolution (CDCR) typically considers mentions of events in the news, which seldom involve the abstract technical concepts that are prevalent in science and technology. These complex concepts take diverse or ambiguous forms and have many hierarchical levels of granularity (e.g., tasks and subtasks), posing challenges for CDCR. We present a new task of Hierarchical CDCR (H-CDCR) with the goal of jointly inferring coreference clusters and the hierarchy between them. We create SciCo, an expert-annotated H-CDCR dataset of scientific papers, three times larger than the prominent ECB+ resource. We study strong baseline models that we customize for H-CDCR, and highlight challenges for future work.
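For concreteness, a small sketch of what a joint H-CDCR prediction could look like as a data structure: coreference clusters over cross-document mentions plus parent-child edges between clusters. The field names are illustrative, not the SciCo schema:

```python
# Toy representation of a joint H-CDCR output: clusters plus a hierarchy over them.
from dataclasses import dataclass, field

@dataclass
class Mention:
    doc_id: str
    text: str

@dataclass
class HCdcrPrediction:
    clusters: list = field(default_factory=list)   # list of lists of Mention
    hierarchy: list = field(default_factory=list)  # list of (parent_idx, child_idx) pairs

pred = HCdcrPrediction(
    clusters=[
        [Mention("paper_1", "image segmentation"), Mention("paper_2", "segmenting images")],
        [Mention("paper_3", "semantic segmentation")],
    ],
    # cluster 1 ("semantic segmentation") is a child of cluster 0 ("image segmentation")
    hierarchy=[(0, 1)],
)
```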
To assess the effectiveness of any medical intervention, researchers must conduct a time-intensive and highly manual literature review. NLP systems can help to automate or assist in parts of this expensive process. In support of this goal, we release MS^2 (Multi-Document Summarization of Medical Studies), a dataset of over 470k documents and 20k summaries derived from the scientific literature. This dataset facilitates the development of systems that can assess and aggregate contradictory evidence across multiple studies, and is the first large-scale, publicly available multi-document summarization dataset in the biomedical domain. We experiment with a summarization system based on BART, with promising early results. We formulate our summarization inputs and targets in both free-text and structured forms, and modify a recently proposed metric to assess the quality of our system's generated summaries. Data and models are available at https://github.com/allenai/ms2
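A rough sketch of a BART-based multi-document summarization baseline in the spirit of the one described, concatenating the input studies before generation; the checkpoint and separator convention are stand-ins, not the released MS^2 system:

```python
# Concatenate multiple study abstracts and summarize them with a BART checkpoint.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

abstracts = [
    "Study 1: drug X reduced symptom duration versus placebo.",
    "Study 2: no significant difference between drug X and placebo was observed.",
]
source = " </s> ".join(abstracts)   # flatten the multi-document input
inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=80, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```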
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multiphase adaptive pretraining offers large gains in task performance.
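A minimal sketch of such a second pretraining phase: masked-language-model training continued on an unlabeled domain or task corpus. The corpus path, base model, and hyperparameters are illustrative assumptions:

```python
# Continued masked-LM pretraining on an unlabeled in-domain corpus (DAPT/TAPT-style).
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# one unlabeled in-domain document per line (hypothetical file)
raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = raw["train"].map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512), batched=True)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="dapt-roberta",
                         per_device_train_batch_size=8, num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```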
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SCIBERT, a pretrained language model based on BERT (Devlin et al., 2019) to address the lack of high-quality, large-scale labeled scientific data. SCIBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.
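A brief usage sketch with the publicly released checkpoint, loaded through the Hugging Face transformers library; the input sentence is a toy example:

```python
# Encode a sentence with the released SciBERT checkpoint.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

sentence = "The BERT encoder was finetuned for citation intent classification."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # contextual embeddings for downstream tasks
```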